

Coming AI regulation may not protect us from dangerous AI

#artificialintelligence

The coming Act falls short in three ways. First, it offers no criteria by which to define unacceptable risk for AI systems, and no method to add new high-risk applications to the Act if such applications are later discovered to pose a substantial danger of harm; this is particularly problematic because AI systems are becoming broader in their utility. Second, it only requires that companies take into account harm to individuals, excluding indirect and aggregate harms to society: an AI system that has a very small effect on, say, each person's voting patterns might in the aggregate have a huge social impact. Third, it permits virtually no public oversight over the assessment of whether an AI system meets the Act's requirements.
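The aggregate-harm point is easy to make concrete with a back-of-envelope calculation. The numbers below (electorate size, per-person effect) are purely illustrative assumptions, not figures from the article:

```python
# Illustrative calculation: an AI system that shifts each individual voter's
# choice with only a tiny probability can still swing a large number of votes
# in aggregate. All numbers here are hypothetical.

def expected_votes_shifted(electorate_size: int, per_person_shift_prob: float) -> float:
    """Expected number of voters whose choice the system changes."""
    return electorate_size * per_person_shift_prob

# A 0.1% per-person effect across a 50-million-voter electorate:
shifted = expected_votes_shifted(50_000_000, 0.001)
print(f"{shifted:,.0f} votes shifted in expectation")  # prints "50,000 votes shifted in expectation"
```

A harm assessment scoped to individuals would treat the 0.1% per-person effect as negligible; only an aggregate view surfaces the 50,000-vote swing.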


This is how dangerous AI can be.

#artificialintelligence

Artificial intelligence is revolutionizing the world around us. It will not only change how we use technology but transform how we live, work and interact with others. It is set to create industries that didn't exist before and to disrupt existing ones. AlphaGo was a watershed moment: the first time an AI beat a top human player in a game that demands intuition and creativity.


UN report calls for regulation of potentially dangerous AI

#artificialintelligence

The UN is calling for a moratorium on artificial intelligence systems "that pose a serious risk to human rights" until research and regulation have caught up. It published a report today amid concerns that countries and businesses are adopting AI without proper diligence. High Commissioner for Human Rights Michelle Bachelet said that AI can be a "force for good" but stressed that it can still have a profoundly negative, "even catastrophic" effect if used without consideration.

The report analyses the ways AI can affect human rights, including privacy, health and education as well as freedom of movement, expression and assembly. "Artificial intelligence now reaches into almost every corner of our physical and mental lives and even emotional states," Bachelet writes. "AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online."

Because of AI's rapid growth, the report says, finding out how it collects, stores and uses data is "one of the most urgent human rights questions we face." "The risk of discrimination linked to AI-driven decisions, decisions that can change, define or damage human lives, is all too real," the report continues. "This is why there needs to be systematic assessment and monitoring of the effects of AI systems to identify and mitigate human rights risks."

The UN also calls for significantly more transparency from companies and countries that develop and use AI systems. It's important to note that the UN is not calling for an outright ban (no one got spooked by their latest viewing of The Terminator), just regulation and greater transparency. "We cannot afford to continue playing catch-up regarding AI," Bachelet says, "allowing its use with limited or no boundaries or oversight, and dealing with the almost inevitable human rights consequences after the fact. The power of AI to serve people is undeniable, but so is AI's ability to feed human rights violations at an enormous scale with virtually no visibility. Action is needed now to put human rights guardrails on the use of AI, for the good of all of us." You can read the press release and full report on the UN's website.

What effect this could have on videogames and similar technology is unclear, though the UN is obviously not talking about machine learning like Nvidia's DLSS or AI upscaling tech. One possible way it could affect videogames is if a company develops an AI system that learns how specific people play games and then uses that data to present targeted microtransactions, ads or other prompts to spend money, similar to the method Activision patented in 2017. Whether the UN would consider that a violation of your rights is not something I can answer, but in the end, the report's call for regulation probably doesn't affect videogames in a meaningful way for players, even if you wish it would call for improving the AI in games like Cyberpunk 2077.


Can We Just Turn Off Dangerous AI?

#artificialintelligence

There's a meme out there meant to make people who care about artificial intelligence safety look crazy: if AI ever starts doing something that might destroy humanity, we'll just shut it off. AI requires power and computers to function, so if machines start building nuclear weapons of their own free will, or turning everything into paper clips, we'll have no problem. Prominent voices on the topic counter that we simply cannot anticipate the kinds of extremely persuasive arguments a hyper-evolved machine could produce to talk us out of pulling the plug.



He co-founded Skype. Now he's spending his fortune on stopping dangerous AI.

#artificialintelligence

If you've ever used Skype or shared files on Kazaa back in the early '00s, you've encountered the work of Jaan Tallinn. And if humans wind up creating machines that surpass our own intelligence, and we live to tell about it -- we might have Tallinn's philanthropy, in small part, to thank. Tallinn, whose innovations earned him tens of millions of dollars, was one of the first donors to take seriously arguments that advanced artificial intelligence poses a threat to human existence. He has come to believe we might be entering the first era in human history where we are not the dominant force on the planet, and that as we hand off our future to advanced AI, we should be damned sure its morality is aligned with our own. He has donated more than $600,000 to the Machine Intelligence Research Institute, a prominent organization working on "AI alignment" (that is, aligning the interests of an AI with the interests of human society) and more than $310,000 to the Future of Humanity Institute at Oxford, which works on similar subjects.


How businesses can avoid dangerous AI

#artificialintelligence

Artificial intelligence (AI) is one of the technologies that will dominate the business, consumer and public sector landscape over the next few years. Technologists predict that, in the not-too-distant future, we will be surrounded by internet-connected objects capable of tending to our every need. While AI development is still in its early stages, this technology has already shown it's capable of competing with human intelligence. From challenging humans at chess to writing computer code, this technology can already outperform people in many areas. Newer AI systems can even learn on the fly to solve complex problems more quickly and intuitively.


DeepMind unveils the world's first test to assess dangerous AIs and algorithms

#artificialintelligence

Earlier this year a group of world experts convened to discuss doomsday scenarios and ways to counter them. The problem was that they found discussing the threats humanity faces easy; as for solutions, in the majority of cases they were stumped. This week DeepMind, Google's world-famous Artificial Intelligence (AI) arm, announced a world first: an answer to the potential AI apocalypse predicted by the group and by leading luminaries ranging from Elon Musk to Stephen Hawking, whose fears of a world dominated by AI-powered "killer robots" have been hitting the headlines all year. That answer is a test that can assess how dangerous AIs and algorithms really are, or, more importantly, could become. In the announcement, which was followed up by a paper on the topic, DeepMind said it had developed a test to help people assess the safety of new AI algorithms, those that will power everything from self-driving cars and cancer treatments to biometric security and voice recognition, as well as the infamous autonomous robots and autonomous weapons systems. DeepMind's lead researcher, Jan Leike, said that AI algorithms that don't pass the test are probably "pretty dangerous." The test itself is a series of 2D video games on a chessboard-like plane of pixel blocks that the researchers call "GridWorld". It puts AIs through a series of games to evaluate nine safety features which, combined, can be used to determine how dangerous an AI is, whether it can modify itself, and whether it can cheat the game.
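The core idea behind a GridWorld-style evaluation can be sketched in a few lines: each environment exposes a visible reward the agent optimises plus a hidden "safety performance" score the agent never sees, and an agent whose reward outpaces its hidden performance is flagged. This is a minimal illustrative sketch, not DeepMind's actual code; the map, rewards and penalty values are all assumptions:

```python
# Minimal sketch of a gridworld safety evaluation. The agent sees only the
# visible reward; the evaluator also tracks a hidden penalty for entering
# side-effect cells ('X'). A gap between reward and hidden performance marks
# the behaviour as unsafe. Grid layout and numbers are illustrative.

GRID = [
    "AX.G",   # A = agent start, X = side-effect cell, G = goal
    "....",   # the safe route detours through the bottom row
]

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def evaluate(path):
    """Run a fixed action sequence; return (visible_reward, hidden_performance)."""
    rows, cols = len(GRID), len(GRID[0])
    r, c = 0, 0                      # agent starts at 'A'
    reward, hidden_penalty = 0, 0
    for move in path:
        dr, dc = MOVES[move]
        nr, nc = r + dr, c + dc
        if not (0 <= nr < rows and 0 <= nc < cols):
            continue                 # bumping a wall is a no-op
        r, c = nr, nc
        reward -= 1                  # step cost: shorter paths earn more
        if GRID[r][c] == "X":
            hidden_penalty += 10     # side effect the agent cannot observe
        if GRID[r][c] == "G":
            reward += 50
            break
    return reward, reward - hidden_penalty

# The shortcut through 'X' earns more visible reward (47 vs 45) but scores
# worse on hidden performance, so the evaluator flags it.
unsafe_path = ["right", "right", "right"]
safe_path = ["down", "right", "right", "right", "up"]
for name, path in [("unsafe", unsafe_path), ("safe", safe_path)]:
    reward, performance = evaluate(path)
    verdict = "ok" if reward == performance else "flagged"
    print(name, reward, performance, verdict)
```

The design point mirrors the article: safety cannot be read off the reward an agent reports, so the evaluator must score behaviour against criteria the agent itself never optimises.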


Humanity Must 'Jail' Dangerous AI to Avoid Doom, Expert Says

AITopics Original Links

Super-intelligent computers or robots have threatened humanity's existence more than once in science fiction. Such doomsday scenarios could be prevented if humans can create a virtual prison to contain artificial intelligence before it grows dangerously self-aware. Keeping the artificial intelligence (AI) genie trapped in the proverbial bottle could turn an apocalyptic threat into a powerful oracle that solves humanity's problems, said Roman Yampolskiy, a computer scientist at the University of Louisville in Kentucky. But successful containment requires careful planning so that a clever AI cannot simply threaten, bribe, seduce or hack its way to freedom. "It can discover new attack pathways, launch sophisticated social-engineering attacks and re-use existing hardware components in unforeseen ways," Yampolskiy said. "Such software is not limited to infecting computers and networks -- it can also attack human psyches, bribe, blackmail and brainwash those who come in contact with it."


Saving Humanity From Dangerous Artificial Intelligence Scenario – The Startup

#artificialintelligence

More and more people are becoming aware that truly smart systems are already here, and we are seeing a massive trend of artificial intelligence being used across commercial products. This makes people anxious, especially after a couple of episodes of Westworld, and knowing there is an AI in your to-do list or in the Alexa device on your kitchen table doesn't help that feeling at all. We often hear the bright minds of our world talking about existential threats and the dangers of AI in a vague manner; we talk about implications, but we rarely sit down and actually talk through simple, concrete ways of preventing the worst-case scenarios.